    Instrumentation of learning situations using automated speech transcription: A prototyping approach

    This paper presents the ongoing design of a set of tools based on live transcription of speech during lectures, intended to instrument traditional lectures as well as web conferences and hybrid learning situations. The toolset exploits the speech and interactions taking place during courses, keeps track of them, and facilitates their reuse, both in students' studies and in future iterations of the course delivered by the teacher. Its goal is to help students stay focused on the teacher's explanations and to offer them greater possibilities for interaction. The prototype was designed through an analysis of the communicational and informational needs of the end users, especially with regard to the instrumentation possibilities offered by the innovative technologies considered in the project. In this paper, we detail the different tools produced to offer synchronous and asynchronous support to the learning activity. We describe a real-life test as well as the changes subsequently made to the device, and finally we report the first experiment conducted with it.

    Privacy attacks for automatic speech recognition acoustic models in a federated learning framework

    This paper investigates methods to effectively retrieve speaker information from personalized speaker-adapted neural network acoustic models (AMs) in automatic speech recognition (ASR). This problem is especially important in the context of federated learning of ASR acoustic models, where a global model is learnt on the server from the updates received from multiple clients. We propose an approach to analyzing the information in neural network AMs based on the model's footprint on a so-called indicator dataset. Using this method, we develop two attack models that aim to infer speaker identity from the updated personalized models without access to the actual users' speech data. Experiments on the TED-LIUM 3 corpus demonstrate that the proposed approaches are very effective, achieving an equal error rate (EER) of 1-2%.
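
    The paper itself includes no code; the following is a minimal sketch of the footprint idea under stated assumptions: a personalized model is run on a fixed, speaker-independent indicator dataset, its concatenated outputs serve as the model's footprint, and the cosine similarity between the footprints of two updated models is used as the attack's verification score. The helper names (extract_footprint, verification_score) and the choice of outputs are hypothetical.

```python
import torch
import torch.nn.functional as F

def extract_footprint(model: torch.nn.Module, indicator_batch: torch.Tensor) -> torch.Tensor:
    """Run a personalized acoustic model on a fixed indicator dataset and
    flatten its outputs into a single footprint vector (hypothetical helper)."""
    model.eval()
    with torch.no_grad():
        outputs = model(indicator_batch)   # e.g. frame-level senone posteriors
    return outputs.flatten()

def verification_score(model_a, model_b, indicator_batch) -> float:
    """Cosine similarity between two footprints: a high score is taken as
    evidence that both models were personalized on the same speaker."""
    fa = extract_footprint(model_a, indicator_batch)
    fb = extract_footprint(model_b, indicator_batch)
    return F.cosine_similarity(fa.unsqueeze(0), fb.unsqueeze(0)).item()
```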

    On speaker verification from execution traces of personalized acoustic models

    Speaker-personalized acoustic models are obtained from a global model by updating its parameters with the speaker's data. An important question is whether access to these personalized models makes it easy to build an attack that identifies the associated speaker. This problem is especially important in the context of federated learning of speech recognition acoustic models, where a global model is learnt on the server from the updates received from multiple clients. We propose an approach to analyzing the information in neural network acoustic models based on the model's footprint on a fixed, independent dataset that we call the indicator dataset. Using these footprints, we develop two very effective attack models that infer speaker identity from the updated personalized models without access to the users' speech data.
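
    As a point of reference, the equal error rate used to measure such attacks is the operating point where the false acceptance and false rejection rates coincide; a standard sketch using scikit-learn follows (the attack scores themselves are assumed to come from the models above).

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels: np.ndarray, scores: np.ndarray) -> float:
    """EER: the ROC operating point where the false positive rate equals
    the false negative rate (1 - true positive rate)."""
    fpr, tpr, _ = roc_curve(labels, scores)  # labels: 1 = same speaker
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return float((fpr[idx] + fnr[idx]) / 2.0)
```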

    LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech

    Self-Supervised Learning (SSL) using huge amounts of unlabeled data has been successfully explored for image and natural language processing. Recent works have also investigated SSL from speech, notably succeeding in improving performance on downstream tasks such as automatic speech recognition (ASR). While these works suggest it is possible to reduce dependence on labeled data for building efficient speech systems, their evaluation was mostly made on ASR and under multiple, heterogeneous experimental settings (most of them for English). This calls into question the objective comparison of SSL approaches and the evaluation of their impact on building speech systems. In this paper, we propose LeBenchmark: a reproducible framework for assessing SSL from speech. It includes not only ASR (high- and low-resource) tasks but also spoken language understanding, speech translation, and emotion recognition. We also focus on speech technologies in a language other than English: French. SSL models of different sizes are trained on carefully sourced and documented datasets. Experiments show that SSL is beneficial for most but not all tasks, which confirms the need for exhaustive and reliable benchmarks to evaluate its real impact. LeBenchmark is shared with the scientific community for reproducible research in SSL from speech.
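
    As an illustration of how such pre-trained SSL models are typically consumed for downstream tasks, here is a minimal sketch using the Hugging Face transformers API; the checkpoint name is assumed to be one of the publicly shared LeBenchmark models and should be checked against the official release.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Checkpoint name assumed from the LeBenchmark release on the Hugging Face Hub.
CKPT = "LeBenchmark/wav2vec2-FR-7K-large"

extractor = Wav2Vec2FeatureExtractor.from_pretrained(CKPT)
model = Wav2Vec2Model.from_pretrained(CKPT)
model.eval()

# One second of dummy 16 kHz audio stands in for a real French utterance.
waveform = torch.zeros(16000)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    # Frame-level SSL representations, usable as features for ASR, SLU, etc.
    hidden = model(**inputs).last_hidden_state  # (1, frames, hidden_dim)
print(hidden.shape)
```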

    LeBenchmark 2.0: a Standardized, Replicable and Enhanced Framework for Self-supervised Representations of French Speech

    Self-supervised learning (SSL) is at the origin of unprecedented improvements in many domains, including computer vision and natural language processing. Speech processing has benefitted drastically from SSL, as most current domain-related tasks are now approached with pre-trained models. This work introduces LeBenchmark 2.0, an open-source framework for assessing and building SSL-equipped French speech technologies. It includes documented, large-scale corpora with up to 14,000 hours of heterogeneous speech; ten pre-trained SSL wav2vec 2.0 models, containing from 26 million to one billion learnable parameters, shared with the community; and an evaluation protocol made of six downstream tasks to complement existing benchmarks. LeBenchmark 2.0 also presents unique perspectives on pre-trained SSL models for speech, with an investigation of frozen versus fine-tuned downstream models and task-agnostic versus task-specific pre-trained models, as well as a discussion of the carbon footprint of large-scale model training.
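
    A minimal sketch of the frozen-versus-fine-tuned distinction investigated here, assuming a PyTorch SSL encoder feeding a downstream head: in the frozen regime the encoder is excluded from the optimizer and acts as a fixed feature extractor, while fine-tuning updates both modules.

```python
import torch

def build_optimizer(encoder: torch.nn.Module,
                    head: torch.nn.Module,
                    freeze_encoder: bool,
                    lr: float = 1e-4) -> torch.optim.Optimizer:
    """Frozen regime: only the downstream head is trained on top of fixed
    SSL features. Fine-tuned regime: encoder and head are both updated."""
    for p in encoder.parameters():
        p.requires_grad = not freeze_encoder
    params = list(head.parameters())
    if not freeze_encoder:
        params += list(encoder.parameters())
    return torch.optim.Adam(params, lr=lr)
```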

    Speech recognition in the context of lectures: assessment, progress and enrichment

    This thesis is part of a study exploring the potential of automatic transcription for the instrumentation of educational situations. Our contribution covers several axes. First, we describe the enrichment and annotation of the COCo dataset, which we produced as part of the ANR PASTEL project. This corpus is composed of videos of different lectures, each related to a particular field (natural language, graphs, functions, ...). In this multi-thematic framework, we address the problem of the linguistic adaptation of automatic speech recognition (ASR) systems. The proposed language model adaptation is based both on the lecture presentation supports provided by the teacher and on in-domain data collected automatically from the web.

    We then focus on the ASR evaluation problem, since existing metrics do not allow a precise evaluation of transcription quality for a given application. We therefore propose two evaluation protocols. The first is an intrinsic evaluation that estimates performance only on the domain words of each lecture (IWER_Average). The second is an extrinsic evaluation that estimates performance on two tasks exploiting the transcription: information retrieval and indexability. Our experimental results show that the global word error rate (WER) masks the gain provided by language model adaptation; to measure this gain properly, specific measures such as those presented in this thesis are needed. As the adaptation relies on data collected from the web, we also study the reproducibility of the language model adaptation results by comparing the performance obtained over a long period of time. Over a collection period of one year, we show that, although the data on the web changed in part from one month to the next, the performance of the adapted transcription systems remained constant (i.e., no significant performance changes), whatever the period considered.

    Finally, we address the thematic segmentation of ASR output and the alignment of slides with the spoken lecture. For thematic segmentation, integrating slide-change information into the TextTiling algorithm provides a significant gain in terms of F-measure. For slide alignment, we compute the cosine similarity between the TF-IDF representations of the transcription segments and of the slide texts, under a constraint that preserves the sequential order of the slides and transcription segments, and we use a confidence measure to discuss the reliability of the proposed approach.
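
    The intrinsic protocol scores only the domain words of each lecture; a simplified sketch of that idea follows, assuming a word-level alignment between reference and hypothesis is already available (this illustrates the principle, not the thesis's exact IWER_Average formula).

```python
def domain_word_error_rate(alignment, domain_words):
    """Error rate restricted to a set of domain words.

    `alignment` is a list of (ref_word, hyp_word) pairs from a standard
    WER alignment, with None marking insertions/deletions; only reference
    words belonging to `domain_words` are scored.
    """
    errors, total = 0, 0
    for ref, hyp in alignment:
        if ref is not None and ref in domain_words:
            total += 1
            if hyp != ref:
                errors += 1
    return errors / total if total else 0.0
```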
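
    The slide alignment step can likewise be sketched: TF-IDF vectors for slides and transcript segments, cosine similarities, and a dynamic-programming pass that enforces the sequential-order constraint by keeping slide indices non-decreasing. This is an illustrative reconstruction, not the thesis code.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def align_slides_to_segments(slides, segments):
    """Assign one slide index to each transcript segment, with indices
    constrained to be non-decreasing (sequential-order constraint)."""
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(slides + segments)
    # Similarity of each segment (rows) to each slide (columns).
    sim = cosine_similarity(tfidf[len(slides):], tfidf[:len(slides)])
    n_seg, n_slides = sim.shape
    dp = np.full((n_seg, n_slides), -np.inf)   # dp[i, j]: best cumulative score
    back = np.zeros((n_seg, n_slides), dtype=int)
    dp[0] = sim[0]
    for i in range(1, n_seg):
        # Best previous slide index among 0..j, enforcing monotonicity.
        arg = np.zeros(n_slides, dtype=int)
        for j in range(1, n_slides):
            arg[j] = arg[j - 1] if dp[i - 1, arg[j - 1]] >= dp[i - 1, j] else j
        dp[i] = dp[i - 1, arg] + sim[i]
        back[i] = arg
    # Backtrack the slide assigned to each segment.
    path = [int(np.argmax(dp[-1]))]
    for i in range(n_seg - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```

    A per-segment confidence could then be derived from the margin between the chosen similarity and the alternatives, echoing the confidence measure discussed in the thesis.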

    Study on Acoustic Model Personalization in a Context of Collaborative Learning Constrained by Privacy Preservation

    This paper investigates different approaches to improving the performance of a speech recognition system for a given speaker, using no more than 5 minutes of speech from that speaker and without exchanging the data of other users/speakers. Inspired by the federated learning paradigm, we consider speakers who have access to a personalized database of their own speech, learn an acoustic model, and collaborate with other speakers in a network to improve their model. Several local personalizations are explored, depending on how the aggregation mechanisms are performed. We study the impact of adaptively selecting a subset of speakers' models based on a notion of similarity. We also investigate the effect of weighted averaging of fine-tuned and global models. In our approach, only neural acoustic model parameters are exchanged; no audio data is exchanged. By avoiding the communication of personal data, the proposed approach tends to preserve the privacy of speakers. Experiments conducted on the TED-LIUM 3 dataset show that the best improvement is obtained by averaging a subset of different acoustic models fine-tuned on several user datasets. Applied to HMM/TDNN acoustic models, our approach quickly and significantly improves ASR performance in terms of WER (for instance, on one of our two evaluation datasets, from 14.84% to 13.45% with less than 5 minutes of speech per speaker).
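
    A minimal sketch of the collaborative personalization scheme described above, assuming each client holds a fine-tuned copy of the same acoustic model: a client selects its most similar peer models and averages their parameters with its own. Cosine similarity between flattened parameter vectors is one plausible instantiation of the similarity notion; the paper's exact choice may differ, and only parameters move across the network, never audio.

```python
import torch
import torch.nn.functional as F

def flatten_params(state_dict):
    """Concatenate all parameters of a model into one vector."""
    return torch.cat([v.detach().flatten().float() for v in state_dict.values()])

def personalize(local_sd, peer_sds, top_k=3, local_weight=0.5):
    """Average the local model with its top-k most similar peers.

    Similarity is cosine similarity between flattened parameter vectors
    (an assumed instantiation); no audio data is ever exchanged.
    """
    ref = flatten_params(local_sd)
    ranked = sorted(
        peer_sds,
        key=lambda sd: F.cosine_similarity(
            ref.unsqueeze(0), flatten_params(sd).unsqueeze(0)).item(),
        reverse=True,
    )
    selected = ranked[:top_k]
    peer_weight = (1.0 - local_weight) / max(len(selected), 1)
    return {
        name: local_weight * local_sd[name].float()
        + peer_weight * sum(sd[name].float() for sd in selected)
        for name in local_sd
    }
```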